28 - Recap Clip 6.?: Modeling Time and Uncertainty [ID:30430]

Hello everybody, I hope all of you are sober again and we can do some temporal probability
models. Remember that we started looking at modeling with time involved. Everything we had
done before that essentially assumed a static world. That is realistic for some things, like
diagnosing your automobile: the state of the automobile is not actually going to change very
much during the time of deliberation. But very often that's not the case, and that's what we
want to look at now. For that we need some notion of time, and to keep things maximally
general, in principle we just say we have a time structure, which is essentially a partially
ordered set. Most people think of time as being linear; in general you don't need to do that.
There are notions of a branching future, which means there are many possible futures, or even
a branching past. The only thing you really don't want in time structures is loops and so on.
So you don't really believe in Groundhog Day or something like this. We do want time to be
progressing with a partial order.

And so what we're going to do is exactly the same as we did before, except that we're now
going to index all the random variables we're talking about with time. Simple thing, except
of course that we're not limiting the size of this set. In particular, we're going to use a
linearly ordered time, namely the natural numbers with the usual less-or-equal ordering.
Think of some kind of a clock that goes ticking along, and at every tick we have a new
situation, where the length of the clock ticks really depends on the application. Some worlds
we care about change daily, some of them annually, some of them in microseconds. So you have
to put that into your modeling, but essentially we have linear discrete time. And of course
we want to make our life simple, and one of the things that makes life simple is if we bound
the number of influences from past times.

Okay, and so we have a very natural way of thinking about these things: we usually want to
have something like a first-order Markov property, which limits the influences from the past
to one, usually though not necessarily one time step, one clock tick, back. You can have
higher-order Markov properties, where you limit, for instance, the incoming influences to
two. Those are usually better models, but our algorithms of course get more complex and have
more complex run times. So we are mostly going to use only first-order Markov properties.
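A first-order Markov chain is easy to sketch in code: the next state is sampled from a single transition table that looks only at the current state. The two states and their probabilities below are illustrative assumptions, not values from the lecture.

```python
import random

# First-order Markov property: the next state depends only on the
# current one, so a single transition table suffices.
# (Illustrative probabilities, not taken from the lecture.)
transition = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

def step(state, rng=random):
    """Sample the next state given only the current one."""
    r, acc = rng.random(), 0.0
    for nxt, p in transition[state].items():
        acc += p
        if acc >= r:
            return nxt
    return nxt  # guard against floating-point rounding

random.seed(0)
states = ["rain"]
for _ in range(10):
    states.append(step(states[-1]))
print(states)
```

A second-order model would instead index the table by the last two states, which is the higher-order variant mentioned above.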

And that was the one thing, and our example was the umbrella example, where we had a hidden
set of variables, namely whether it rains today, and an observable part: we can make
observations of whether the director brings an umbrella. Very importantly, the rain variables
themselves are unobservable to the prison guard. If we want to do it in a first-order Markov
way, this is the very natural topology we are getting. It's not totally accurate, but it's a
nice example; it's not totally inaccurate either.
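The umbrella world can be written down as a tiny generative model: Rain_t is the hidden state, and Umbrella_t is the only thing the guard gets to see. The numeric probabilities below are the usual textbook values for this example; they are assumptions here, not values given in this clip.

```python
import random

# Umbrella world as a hidden Markov model sketch.
# Rain_t is hidden; Umbrella_t is the guard's observation.
# (Usual textbook probabilities, assumed for illustration.)
P_RAIN_GIVEN_RAIN = 0.7   # P(Rain_t | Rain_{t-1} = true)
P_RAIN_GIVEN_DRY  = 0.3   # P(Rain_t | Rain_{t-1} = false)
P_UMB_GIVEN_RAIN  = 0.9   # P(Umbrella_t | Rain_t = true)
P_UMB_GIVEN_DRY   = 0.2   # P(Umbrella_t | Rain_t = false)

def simulate(days, rng):
    rain, trace = True, []
    for _ in range(days):
        p_rain = P_RAIN_GIVEN_RAIN if rain else P_RAIN_GIVEN_DRY
        rain = rng.random() < p_rain            # hidden state evolves
        p_umb = P_UMB_GIVEN_RAIN if rain else P_UMB_GIVEN_DRY
        umbrella = rng.random() < p_umb         # observation is emitted
        trace.append((rain, umbrella))
    return trace

rng = random.Random(42)
trace = simulate(5, rng)
for t, (rain, umbrella) in enumerate(trace, 1):
    print(f"day {t}: rain={rain}, umbrella seen={umbrella}")
```

The guard only ever sees the second component of each pair; inferring the first from the second is exactly the inference problem these models are for.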

So if you think of this as a Bayesian network, then of course it's infinite. We can make it
finite by chopping everything off, say at the lifetime of the guard. But even if we make it
finite by brute force like this, it's still going to be practically infinite. Say the guard
works there for 40 years of his life; then we have something like 14,600 days.

So we have to do something about that, and what we are doing is essentially something that's
already apparent here: we can time-slice and get something extremely simple. The idea is to
make things simple by saying that all the time slices are essentially equal, and that the
transition probabilities are always equal. And that's something I would like you to think a
little bit more about. We have two kinds of things we can put at the arrows. The first is the
transition probabilities between the events of raining today and raining yesterday. Even
though in a normal Bayesian network those could all be different, we are going to say the
model is stationary if and only if these transitions are always the same. And if you think
about the agent, this is really the model of how the world evolves. That's the transition
model, something we've seen before.
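Stationarity is what makes inference over these practically infinite networks tractable: because P(Rain_t | Rain_{t-1}) is the same at every tick, a single predict-then-update rule serves all time slices. A minimal sketch of one such forward step, again assuming the usual textbook numbers for this example:

```python
# One step of filtering under a stationary transition model.
# (Usual textbook probabilities, assumed for illustration.)
T = {True: 0.7, False: 0.3}        # P(Rain_t = true | Rain_{t-1})
SENSOR = {True: 0.9, False: 0.2}   # P(Umbrella = true | Rain_t)

def forward(belief_rain, umbrella_seen):
    """Update P(Rain_t | evidence so far) from yesterday's belief."""
    # Predict: push the belief through the (stationary) transition model.
    prior = belief_rain * T[True] + (1 - belief_rain) * T[False]
    # Update: weight by the sensor model for today's observation.
    like_rain = SENSOR[True] if umbrella_seen else 1 - SENSOR[True]
    like_dry = SENSOR[False] if umbrella_seen else 1 - SENSOR[False]
    num = like_rain * prior
    return num / (num + like_dry * (1 - prior))

belief = 0.5                        # uninformed prior on day 0
for umbrella in [True, True]:       # the guard sees umbrellas twice
    belief = forward(belief, umbrella)
print(round(belief, 3))             # prints 0.883
```

Note that `forward` has no time index in it at all: stationarity means the same function is applied at every clock tick, no matter how many days the guard has been on the job.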

Part of chapter: Recaps
Access: open access
Duration: 00:11:11 min
Recording date: 2021-03-30
Uploaded on: 2021-03-31 11:57:28
Language: en-US

Recap: Modeling Time and Uncertainty

The main video on this topic does not exist yet.
